Addressing Failure Prediction by Learning Model Confidence
Reliably assessing the confidence of a deep neural network and predicting its failures is of primary importance for the practical deployment of these models. In this paper, we propose a new target criterion for model confidence, the True Class Probability (TCP). We show that TCP is better suited to failure prediction than the classic Maximum Class Probability (MCP), and we provide theoretical guarantees for TCP in this context. Since the true class is by definition unknown at test time, we propose to learn the TCP criterion on the training set, introducing a specific learning scheme adapted to this context. Extensive experiments validate the relevance of the proposed approach. We study various network architectures and small- and large-scale datasets for image classification and semantic segmentation, and we show that our approach consistently outperforms several strong baselines, from MCP to Bayesian uncertainty, as well as recent methods specifically designed for failure prediction.
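The distinction between the two criteria can be illustrated with a small sketch. Given softmax outputs, MCP is the probability of the predicted (argmax) class, while TCP is the probability assigned to the ground-truth class; the toy values below are purely illustrative and not taken from the paper's experiments.

```python
import numpy as np

# Toy softmax outputs for 4 samples over 3 classes (illustrative values).
probs = np.array([
    [0.70, 0.20, 0.10],   # confident and correct
    [0.50, 0.40, 0.10],   # correct, low margin
    [0.40, 0.45, 0.15],   # misclassified
    [0.34, 0.33, 0.33],   # near-uniform, misclassified
])
true_labels = np.array([0, 0, 0, 2])

# MCP: probability of the predicted class. It is always >= 1/K and can
# remain high on errors, which makes it a weak failure-prediction signal.
mcp = probs.max(axis=1)

# TCP: probability of the *true* class. It is low exactly when the model
# errs, but it requires the label, hence the paper's proposal to learn it.
tcp = probs[np.arange(len(probs)), true_labels]

# Failure indicator: prediction disagrees with the ground truth.
errors = probs.argmax(axis=1) != true_labels
```

On the misclassified samples, TCP (0.40 and 0.33) drops below the corresponding MCP values (0.45 and 0.34), which is the separation the paper exploits for failure prediction.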
Reviews: Addressing Failure Prediction by Learning Model Confidence
Originality: The use of the True Class Probability (TCP) is novel in the area of uncertainty estimation for the task of detecting misclassifications. However, as the TCP cannot be known at test time, the authors use an additional network component to predict this score based on intermediate features derived from the predictive model. This is very similar to the task of confidence score estimation in speech recognition (https://arxiv.org/abs/1810.13024). While the use of the TCP as a target is novel, confidence score estimation is not, so the work has limited novelty.

Quality: The paper is technically sound, and the experiments are sensible and in line with standard practice in the area. The authors are generally honest in the evaluation of their work; however, they do not analyse the limitations of the proposed approach.
Authors: Charles Corbière, Nicolas Thome, Avner Bar-Hen, Matthieu Cord, Patrick Pérez